%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2019/09.10.14.31
%2 sid.inpe.br/sibgrapi/2019/09.10.14.31.42
%@doi 10.1109/SIBGRAPI.2019.00039
%T Brain extraction network trained with “silver standard” data and fine-tuned with manual annotation for improved segmentation
%D 2019
%A Souza, Roberto,
%A Lucena, Oeslle,
%A Bento, Mariana,
%A Garrafa, Julia,
%A Rittner, Letícia,
%A Appenzeller, Simone,
%A Lotufo, Roberto,
%A Frayne, Richard,
%@affiliation University of Calgary
%@affiliation King’s College London
%@affiliation University of Calgary
%@affiliation University of Campinas
%@affiliation University of Campinas
%@affiliation University of Campinas
%@affiliation University of Campinas
%@affiliation University of Calgary
%E Oliveira, Luciano Rebouças de,
%E Sarder, Pinaki,
%E Lage, Marcos,
%E Sadlo, Filip,
%B Conference on Graphics, Patterns and Images, 32 (SIBGRAPI)
%C Rio de Janeiro, RJ, Brazil
%8 28-31 Oct. 2019
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K skull-stripping, brain extraction, MRI, segmentation.
%X Training convolutional neural networks (CNNs) for medical image segmentation often requires large and representative sets of images and their corresponding annotations. Obtaining annotated images usually requires manual intervention, which is expensive and time consuming, as it typically requires a specialist. An alternative approach is to leverage existing automatic segmentation tools and combine them to create consensus-based silver-standard annotations. A drawback to this approach is that silver-standard annotations are usually smooth, and this smoothness is transmitted to the output segmentation of the network. Our proposal is to use a two-stage approach. First, silver-standard datasets are used to generate a large set of annotated images in order to train the brain extraction network from scratch. Second, fine-tuning is performed using much smaller amounts of manually annotated data so that the network can learn the finer details that are not preserved in the silver-standard data. As an example, our two-stage brain extraction approach has been shown to outperform seven state-of-the-art techniques across three different public datasets. Our results also suggest that CNNs can potentially capture inter-rater annotation variability between experts who annotate the same set of images following the same guidelines, and also adapt to different annotation guidelines.
%@language en
%3 SIBGRAPI_Skull_stripping_Fine_tuning.pdf